
fix(storage): rate limit bucket deletion in cleanup#5612

Merged
dbolduc merged 4 commits into
googleapis:mainfrom
abhinavgautam01:fix/rate-limit-bucket-deletion
May 12, 2026
Conversation

@abhinavgautam01
Contributor

Fixes #5219

Problem

When there are many stale buckets in integration tests, the cleanup process
deletes them in parallel, exceeding GCP's Storage API rate limit
(approximately one request every two seconds).

Solution

Serialize bucket deletion by removing parallel spawning (tokio::spawn and
join_all) and instead delete buckets sequentially with a 2-second delay
between each deletion to respect the API rate limit.

Changes

  • Changed cleanup_stale_buckets() to delete buckets sequentially
  • Added 2-second delay between bucket deletions
  • Maintains proper error handling for each deletion attempt
  • Delay is not applied after the last bucket (no unnecessary wait)

Testing

This change should prevent rate limit errors during stale bucket cleanup
in integration tests.

@abhinavgautam01 abhinavgautam01 requested review from a team as code owners May 7, 2026 11:15
@product-auto-label product-auto-label Bot added the api: storage Issues related to the Cloud Storage API. label May 7, 2026

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request modifies the cleanup_stale_buckets function in the storage examples to serialize bucket deletions. The implementation replaces concurrent task spawning with a sequential loop that introduces a 2-second delay between deletions to comply with GCP rate limits. Feedback was provided to replace println! calls with the tracing crate to align with the repository's structured logging standards.

@abhinavgautam01 abhinavgautam01 force-pushed the fix/rate-limit-bucket-deletion branch from 9c33688 to 1294c4a Compare May 7, 2026 11:35
@abhinavgautam01 abhinavgautam01 requested a review from coryan May 9, 2026 16:58
@coryan
Contributor

coryan commented May 9, 2026

/gcbrun

@codecov

codecov Bot commented May 9, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 97.92%. Comparing base (aaabe4e) to head (15dadf5).

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #5612   +/-   ##
=======================================
  Coverage   97.92%   97.92%           
=======================================
  Files         221      221           
  Lines       52759    52759           
=======================================
+ Hits        51662    51663    +1     
+ Misses       1097     1096    -1     

☔ View full report in Codecov by Sentry.

@abhinavgautam01
Contributor Author

ping @coryan

@coryan
Contributor

coryan commented May 11, 2026

/gcbrun

@coryan
Contributor

coryan commented May 11, 2026

Formatting is failing with:

diff --git a/src/storage/examples/src/lib.rs b/src/storage/examples/src/lib.rs
index 6c42795..22eedff 100644
--- a/src/storage/examples/src/lib.rs
+++ b/src/storage/examples/src/lib.rs
@@ -23,10 +23,8 @@ use google_cloud_gax::options::RequestOptionsBuilder;
 use google_cloud_gax::paginator::ItemPaginator as _;
 use google_cloud_gax::throttle_result::ThrottleResult;
 use google_cloud_gax::{
-    backoff_policy::BackoffPolicy,
-    error::rpc::Code,
-    exponential_backoff::ExponentialBackoffBuilder,
-    retry_policy::RetryPolicyExt,
+    backoff_policy::BackoffPolicy, error::rpc::Code,
+    exponential_backoff::ExponentialBackoffBuilder, retry_policy::RetryPolicyExt,
     retry_state::RetryState,
 };
 use google_cloud_storage::client::{Storage, StorageControl};

Remember to always use cargo format from the top-level directory so the configuration takes effect. I think in this case cargo format -p storage-samples would fix the problem.

@abhinavgautam01
Contributor Author

Good catch — ran the formatter from the repo root with cargo fmt -p storage-samples so the workspace rustfmt config applies; it collapsed the google_cloud_gax import list to match the diff you pasted.

@abhinavgautam01 abhinavgautam01 requested a review from coryan May 11, 2026 14:29
@coryan
Contributor

coryan commented May 11, 2026

/gcbrun

Commit messages from the force-push:

  • Serialize bucket deletion to respect GCP API rate limit (~1 request per 2 seconds). Uses structured logging with tracing crate for consistency with repository standards.
  • …backoff: Replace fixed inter-delete sleeps with exponential backoff on retryable DeleteBucket errors (initial delay >= 2s). Empty stale buckets in parallel, then delete buckets sequentially. Add comments explaining multi-worker rate limits.
  • Replace manual delete retry loop with with_backoff_policy + idempotency
  • Restore GC label comment in empty_bucket_contents
  • Group cleanup_bucket with other cleanup helpers; ASCII doc comment
@coryan coryan force-pushed the fix/rate-limit-bucket-deletion branch from 21dc199 to 15dadf5 Compare May 12, 2026 01:16
@coryan
Contributor

coryan commented May 12, 2026

/gcbrun

@coryan
Contributor

coryan commented May 12, 2026

This is looking good, I need a review from somebody in the @googleapis/gcs-team

@dbolduc dbolduc merged commit d79ad8c into googleapis:main May 12, 2026
35 of 36 checks passed

Labels

api: storage Issues related to the Cloud Storage API.


Development

Successfully merging this pull request may close these issues.

ci: rate limit storage bucket deletion

3 participants